Unsupervised Multi-modal Hashing for Cross-Modal Retrieval

Authors

Abstract

The explosive growth of multimedia data on the Internet has magnified the challenge of information retrieval. Multimedia usually emerges in different modalities, such as image, text, video, and audio. Unsupervised cross-modal hashing techniques that support searching among multi-modal data have gained importance in large-scale retrieval tasks because of their advantages of low storage cost and high efficiency. Current methods learn hash functions by transforming high-dimensional features into discrete codes. However, the original manifold structure and semantic correlation are not preserved well in the compact codes. We propose a novel unsupervised method to cope with this problem from two perspectives. On one hand, the locally geometric structures of the textual and visual spaces are reconstructed in a unified feature space seamlessly and simultaneously. On the other hand, \(\ell _{2,1}\)-norm penalties are imposed on the projection matrices separately to select relevant and discriminative features. Experimental results indicate that our proposed method achieves an improvement of 1%, 6%, 9%, and 2% over the best comparison method on four publicly available datasets (WiKi, PASCAL-VOC, UCI Handwritten Digit, and NUS-WIDE), respectively. In conclusion, the proposed framework, which combines manifold learning and multimodal graph embedding, is effective for learning compact hash codes and achieves superior performance compared with state-of-the-art methods.
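For reference, the \(\ell _{2,1}\)-norm mentioned in the abstract is the standard row-wise mixed norm; for a projection matrix \(W \in \mathbb{R}^{d \times k}\) with rows \(w^{i}\), it is defined as

\[
\|W\|_{2,1} = \sum_{i=1}^{d} \sqrt{\sum_{j=1}^{k} W_{ij}^{2}} = \sum_{i=1}^{d} \left\|w^{i}\right\|_{2}.
\]

Minimizing this penalty drives entire rows of \(W\) toward zero, deactivating the corresponding input dimensions, which is why it acts as a feature-selection mechanism for the relevant and discriminative features the abstract refers to.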


Similar Articles

Unsupervised Generative Adversarial Cross-modal Hashing

Cross-modal hashing aims to map heterogeneous multimedia data into a common Hamming space, which can realize fast and flexible retrieval across different modalities. Unsupervised cross-modal hashing is more flexible and applicable than supervised methods, since no intensive labeling work is involved. However, existing unsupervised methods learn hashing functions by preserving inter and intra co...
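To make the "common Hamming space" idea concrete, here is a minimal illustrative sketch (with randomly generated toy codes, not the hashing model of any paper listed here): once both modalities are encoded as fixed-length binary vectors, cross-modal retrieval reduces to counting differing bits.

    import numpy as np

    # Toy setup: 1000 database items from the image modality and one text
    # query, all assumed to be hashed into the same 32-bit Hamming space.
    rng = np.random.default_rng(0)
    n_bits = 32
    image_codes = rng.integers(0, 2, size=(1000, n_bits), dtype=np.uint8)
    text_query = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

    # Hamming distance = number of bit positions where the codes differ.
    distances = np.count_nonzero(image_codes != text_query, axis=1)

    # Retrieve the 5 database images nearest to the text query.
    top5 = np.argsort(distances)[:5]
    print(top5, distances[top5])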

Self-Supervised Adversarial Hashing Networks for Cross-Modal Retrieval

Thanks to the success of deep learning, cross-modal retrieval has made significant progress recently. However, there still remains a crucial bottleneck: how to bridge the modality gap to further enhance the retrieval accuracy. In this paper, we propose a self-supervised adversarial hashing (SSAH) approach, which lies among the early attempts to incorporate adversarial learning into cross-modal ...

Pairwise Relationship Guided Deep Hashing for Cross-Modal Retrieval

With the benefits of low storage cost and fast query speed, cross-modal hashing has received considerable attention recently. However, almost all existing methods on cross-modal hashing cannot obtain powerful hash codes due to directly utilizing hand-crafted features or ignoring heterogeneous correlations across different modalities, which will greatly degrade the retrieval performance. In this pape...

HashGAN: Attention-aware Deep Adversarial Hashing for Cross Modal Retrieval

With the rapid growth of multi-modal data, hashing methods for cross-modal retrieval have received considerable attention. Deep-networks-based cross-modal hashing methods are appealing as they can integrate feature learning and hash coding into end-to-end trainable frameworks. However, it is still challenging to find content similarities between different modalities of data due to the heterogenei...

Correlation Hashing Network for Efficient Cross-Modal Retrieval

Due to the storage and retrieval efficiency, hashing has been widely deployed to approximate nearest neighbor search for large-scale multimedia retrieval. Cross-modal hashing, which improves the quality of hash coding by exploiting the semantic correlation across different modalities, has received increasing attention recently. For most existing cross-modal hashing methods, an object is first r...


Journal

Journal title: Cognitive Computation

Year: 2021

ISSN: 1866-9956, 1866-9964

DOI: https://doi.org/10.1007/s12559-021-09847-4